In computer vision, finding point correspondences among images plays an important role in many applications, such as image stitching, image retrieval, and visual localization. Most research works focus on matching local features before a sampling method such as RANSAC is employed to verify the initial matching results by repeatedly fitting a global transformation among the images. However, incorrect matches may still exist, while careful examination of such problems is often skipped. Accordingly, a geometrically constrained algorithm is proposed in this work to verify the correctness of initially matched SIFT keypoints based on view-invariant cross-ratios (CRs). By randomly forming pentagons from these keypoints and matching their shapes and locations among images with CRs, robust planar region estimation can be achieved efficiently for the above verification, while correct and incorrect keypoint matches can be examined easily with respect to those shape- and location-matched pentagons. Experimental results show that satisfactory results can be obtained for various scenes with single as well as multiple planar regions.
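As a concrete illustration of the cross-ratio idea, the sketch below computes two standard projective invariants of five coplanar points from 3x3 determinants of their homogeneous coordinates and compares them across two views. The abstract does not give the paper's exact CR formulation, so the choice of invariants, the function names, and the tolerance are assumptions of this sketch, not the authors' method.

```python
import numpy as np

def det3(p, i, j, k):
    # Determinant of the 3x3 matrix stacking homogeneous points p[i], p[j], p[k].
    return np.linalg.det(np.stack([p[i], p[j], p[k]]))

def five_point_invariants(pts):
    # Two projective invariants of five coplanar points given as (5, 2) pixel coords.
    # Each point index appears equally often in numerator and denominator, so the
    # ratios are unchanged under any planar projective transformation.
    p = np.hstack([np.asarray(pts, float), np.ones((5, 1))])  # homogeneous coords
    i1 = (det3(p, 0, 1, 4) * det3(p, 0, 2, 3)) / (det3(p, 0, 1, 3) * det3(p, 0, 2, 4))
    i2 = (det3(p, 1, 2, 4) * det3(p, 0, 1, 3)) / (det3(p, 1, 2, 3) * det3(p, 0, 1, 4))
    return i1, i2

def pentagon_consistent(pts_a, pts_b, tol=0.05):
    # Accept a pentagon of matched keypoints when its invariants agree across views.
    inv_a, inv_b = five_point_invariants(pts_a), five_point_invariants(pts_b)
    return all(abs(a - b) / max(abs(a), abs(b), 1e-9) < tol
               for a, b in zip(inv_a, inv_b))
```

Pentagons that pass this test vote for a common plane; keypoint matches inconsistent with all accepted pentagons can then be flagged as incorrect.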
Despite the success of deep learning on supervised point cloud semantic segmentation, obtaining large-scale point-wise manual annotations remains a significant challenge. To alleviate the huge annotation burden, we propose a Region-based and Diversity-aware Active Learning (ReDAL) framework, a general framework for many deep learning approaches, aiming to automatically select informative and diverse sub-scene regions for label acquisition. Observing that only a small portion of annotated regions is sufficient for 3D scene understanding with deep learning, we use softmax entropy, color discontinuity, and structural complexity to measure the information of sub-scene regions. A diversity-aware selection algorithm is also developed to avoid redundant annotations resulting from selecting informative but similar regions in a querying batch. Extensive experiments show that our method highly outperforms previous active learning strategies, and we achieve 90% of fully supervised performance while requiring less than 15% and 5% of the annotations on the S3DIS and SemanticKITTI datasets, respectively. Our code is publicly available at https://github.com/tsunghan-wu/redal.
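A hedged sketch of the two ingredients the abstract names: a region information score combining softmax entropy, color discontinuity, and structural complexity, and a greedy diversity-aware batch selection that decays the scores of regions similar to ones already queried. The weighting coefficients, similarity penalty, and function names are illustrative assumptions, not ReDAL's exact formulation.

```python
import numpy as np

def region_score(probs, color_disc, struct_complex, a=1.0, b=1.0, c=1.0):
    # Information score of one region: mean softmax entropy over its points,
    # plus color-discontinuity and structural-complexity terms (weights assumed).
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1).mean()
    return a * ent + b * color_disc + c * struct_complex

def diverse_select(scores, feats, budget, penalty=0.5):
    # Greedy diversity-aware selection: repeatedly take the highest-scoring
    # region, then decay the scores of regions with similar features so the
    # query batch does not fill up with near-duplicates.
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    base = np.asarray(scores, float)
    adjusted = base.copy()
    chosen = []
    for _ in range(budget):
        idx = int(np.argmax(adjusted))
        chosen.append(idx)
        sim = np.clip(feats @ feats[idx], 0.0, None)  # cosine similarity to pick
        adjusted -= penalty * sim * base              # similar regions sink
        adjusted[chosen] = -np.inf                    # never re-pick
    return chosen
```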
Owing to the lack of large-scale labeled 3D datasets, most 3D neural networks are trained from scratch. In this paper, we present a novel 3D pretraining method that leverages 2D networks learned from rich 2D datasets. We propose contrastive pixel-to-point knowledge transfer to effectively utilize 2D information by mapping pixel-level and point-level features into the same embedding space. Due to the heterogeneous nature of 2D and 3D networks, we introduce a back-projection function to align the features between 2D and 3D to make the transfer possible. Additionally, we devise an upsampling feature projection layer to increase the spatial resolution of high-level 2D feature maps, which enables learning fine-grained 3D representations. With a pretrained 2D network, the proposed pretraining process requires no additional 2D or 3D labeled data, further alleviating the expensive cost of 3D data annotation. To the best of our knowledge, we are the first to exploit existing 2D trained weights to pretrain 3D deep neural networks. Our extensive experiments show that 3D models pretrained with 2D knowledge boost the performance of 3D networks across various real-world 3D downstream tasks.
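A minimal sketch of what a contrastive pixel-to-point objective could look like, assuming pixel and point features have already been mapped into the shared embedding space and paired by 2D-3D correspondence; the InfoNCE form, temperature, and shapes are assumptions, as the abstract does not specify the loss.

```python
import torch
import torch.nn.functional as F

def pixel_to_point_info_nce(pix_feats, pt_feats, tau=0.07):
    # pix_feats, pt_feats: (N, D) features, row i of each is a matched pixel/point
    # pair (back-projection and feature projection are assumed to have been applied).
    pix = F.normalize(pix_feats, dim=1)
    pts = F.normalize(pt_feats, dim=1)
    logits = pix @ pts.t() / tau                         # (N, N) similarities
    targets = torch.arange(pix.size(0), device=pix.device)
    return F.cross_entropy(logits, targets)              # positives on the diagonal
```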
Depth estimation features benefit 3D recognition. Commodity-grade depth cameras can capture depth and color images in real time. However, the sensors cannot correctly scan glossy, transparent, or distant surfaces. As a result, enhancing and restoring the sensed depth is an important task. Depth completion aims to fill the holes that sensors fail to detect, which remains a complex task for machine learning. Traditional hand-tuned methods have reached their limits, while neural-network-based methods tend to copy and interpolate the output from surrounding depth values. This leads to blurred boundaries, and the structure of the depth map is lost. Hence, our main work is to design an end-to-end network that improves the completed depth maps while maintaining edge clarity. We utilize a self-attention mechanism, previously used in the image inpainting field, to extract more useful information in each convolutional layer, thereby enhancing the completed depth map. Furthermore, we propose a boundary consistency concept to enhance the depth map quality and structure. Experimental results validate the effectiveness of our self-attention and boundary consistency schemes, which outperform previous state-of-the-art depth completion work on the Matterport3D dataset. Our code is publicly available at https://github.com/tsunghan-wu/depth-completion.
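One plausible reading of the boundary consistency idea is an auxiliary loss that matches the edge maps of the completed and ground-truth depth, penalizing blurred boundaries. The Sobel-based edge extraction and L1 matching below are assumptions of this sketch, not the paper's stated formulation.

```python
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def depth_edges(depth):
    # Gradient magnitude of a (B, 1, H, W) depth map via Sobel filters.
    gx = F.conv2d(depth, SOBEL_X.to(depth.device), padding=1)
    gy = F.conv2d(depth, SOBEL_Y.to(depth.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def boundary_consistency_loss(pred_depth, gt_depth):
    # Encourage sharp, structure-preserving completions by aligning edge maps.
    return F.l1_loss(depth_edges(pred_depth), depth_edges(gt_depth))
```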
We present 360-DFPE, a sequential floor plan estimation method that directly takes 360-degree images as input without relying on active sensors or 3D information. Our approach leverages a loosely coupled integration between a monocular visual SLAM solution and a monocular 360-degree room layout method, which estimate camera poses and layout geometries, respectively. Since our task is to sequentially capture the floor plan using monocular images, the entire scene structure, room instances, and room shapes are unknown in advance. To tackle these challenges, we first handle the scale difference between visual odometry and layout geometry by formulating an entropy minimization process, which enables us to directly align 360-degree layouts without knowing the entire scene beforehand. Second, to sequentially identify individual rooms, we propose a novel room identification algorithm that tracks every room along the camera exploration using geometric information. Lastly, to estimate the final shape of a room, we propose a shortest-path algorithm with an iterative coarse-to-fine strategy, which improves upon existing formulations with higher accuracy and faster runtime. Moreover, we collect a new floor plan dataset with challenging large-scale scenes, providing both point clouds and sequential 360-degree image information. Experimental results show that our monocular solution performs favorably against current state-of-the-art algorithms that rely on active sensors and require the entire scene reconstruction data in advance. Our code and dataset will be released soon.
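To make the entropy minimization step concrete, here is a toy version under strong assumptions: candidate odometry-to-layout scales are scanned, and the one whose registered layout points form the sharpest (lowest-entropy) occupancy histogram is kept. The histogram formulation, the grid search, and the input conventions are illustrative only; the paper's actual objective may differ.

```python
import numpy as np

def occupancy_entropy(points_xy, bins=64):
    # Shannon entropy of a 2D occupancy histogram of layout boundary points:
    # a well-aligned registration concentrates mass and lowers entropy.
    hist, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1], bins=bins)
    p = hist.ravel() / max(hist.sum(), 1)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def estimate_scale(layout_xy, cam_t, scales=np.linspace(0.1, 10.0, 200)):
    # layout_xy: (N, 2) layout points in the layout frame; cam_t: (2,) camera
    # translation from visual odometry. Pick the scale that minimizes entropy.
    return min(scales, key=lambda s: occupancy_entropy(layout_xy * s + cam_t))
```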
Anomaly awareness is an essential capability for safety-critical applications such as autonomous driving. While recent progress in robotics and computer vision has enabled anomaly detection for image classification, anomaly detection for semantic segmentation is less explored. Conventional anomaly-aware systems assume other existing classes as out-of-distribution (pseudo-unknown) classes for training the model, which leads to two drawbacks: (1) the unknown classes that the application actually needs to cope with might not exist during training time; (2) model performance strongly depends on the class selection. Observing this, we propose a novel synthetic-unknown data generation method intended to tackle the anomaly-aware semantic segmentation task. We design a new Masked Gradient Update (MGU) module to generate auxiliary data along the distribution boundary. Furthermore, we modify the traditional cross-entropy loss to emphasize the boundary data points. We achieve state-of-the-art performance on two anomaly segmentation datasets. Ablation studies also demonstrate the effectiveness of the proposed modules.
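The abstract does not give the modified loss, so the sketch below shows one natural form: a per-pixel cross-entropy that up-weights pixels flagged as near the distribution boundary (e.g., synthetic unknowns from an MGU-style module). The weight value and the source of the mask are assumptions.

```python
import torch
import torch.nn.functional as F

def boundary_emphasized_ce(logits, targets, boundary_mask, w_boundary=2.0):
    # logits: (B, C, H, W); targets: (B, H, W) class indices;
    # boundary_mask: (B, H, W) bool, True for boundary/synthetic-unknown pixels.
    per_pixel = F.cross_entropy(logits, targets, reduction="none")  # (B, H, W)
    weights = torch.ones_like(per_pixel)
    weights[boundary_mask] = w_boundary   # emphasize boundary data points
    return (weights * per_pixel).mean()
```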
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even when the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
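A rough sketch of the inverted-residual pattern the iRMB generalizes: expand channels, mix spatial information, project back, with a residual shortcut. Here the mixer is a depthwise convolution for short-distance dependency; the paper's block can swap in attention for long-distance stages. All layer choices below (class name, expansion ratio, normalization, activation) are illustrative, not EMO's exact design.

```python
import torch.nn as nn

class iRMBSketch(nn.Module):
    # Expand -> mix tokens -> project, wrapped in a residual connection.
    def __init__(self, dim, expand=4):
        super().__init__()
        hidden = dim * expand
        self.norm = nn.BatchNorm2d(dim)
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.mix = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)  # depthwise
        self.act = nn.GELU()
        self.project = nn.Conv2d(hidden, dim, 1)

    def forward(self, x):
        return x + self.project(self.act(self.mix(self.expand(self.norm(x)))))
```

A 4-phase model in this style would simply stack such blocks at decreasing spatial resolutions, ResNet-fashion.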
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses these question-answer pairs as training data for a state-of-the-art BERT-based QA system. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed from the subjects (or objects) and predicates, while the objects (or subjects) are treated as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves on-par performance with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
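To illustrate the triple-to-question step, a minimal sketch: each triple yields one question with the object as answer and one with the subject as answer. The wh-templates here are placeholders; the abstract does not specify PIE-QG's actual question templates.

```python
def triples_to_qa(triples):
    # triples: iterable of (subject, predicate, object) strings from OpenIE.
    pairs = []
    for subj, pred, obj in triples:
        pairs.append((f"{subj} {pred} whom or what?", obj))  # object as answer
        pairs.append((f"Who or what {pred} {obj}?", subj))   # subject as answer
    return pairs

# e.g. ("Marie Curie", "discovered", "polonium") yields
#   ("Marie Curie discovered whom or what?", "polonium")
#   ("Who or what discovered polonium?", "Marie Curie")
```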
Transformers have achieved impressive success on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach designed specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally excavate the impact of the Transformer from limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens to simultaneously achieve difficulty measurement and maintain the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
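A hedged sketch of the two objectives the abstract describes: a BYOL-style consistency term where the online branch predicts the (gradient-stopped) target branch's representation, plus an auxiliary binary difficulty-ranking term. The loss forms, shapes, and the unit weighting between the two terms are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def bolt_losses(online_pred, target_proj, difficulty_logits, harder_is_online):
    # online_pred, target_proj: (N, D) token representations from the two branches;
    # difficulty_logits: (N,) auxiliary-head logits; harder_is_online: (N,) bool,
    # True when the online branch received the harder perturbation.
    consistency = 2 - 2 * F.cosine_similarity(
        online_pred, target_proj.detach(), dim=-1).mean()   # target gets no gradient
    ranking = F.binary_cross_entropy_with_logits(
        difficulty_logits, harder_is_online.float())        # which branch is harder?
    return consistency + ranking
```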
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGEs to infer inductively. To address this challenge, we resort to analogical inference and propose a novel and general self-supervised framework, AnKGE, to enhance KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference, which takes the original element embedding from a well-trained KGE model as input and outputs the analogical object embedding. In order to combine the inductive inference capability of the original KGE model with the analogical inference capability contributed by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights in the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
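The interpolation the abstract describes reduces to a convex combination of the two scores; a minimal sketch, where `lam` stands in for AnKGE's adaptive weight, whose exact parameterization is not given in the abstract.

```python
def ankge_score(base_score, analogy_score, lam):
    # Final link-prediction score: blend the well-trained KGE model's score
    # with the analogy score. Works elementwise on floats or arrays of
    # candidate scores; lam in [0, 1] is the (adaptive) interpolation weight.
    return lam * analogy_score + (1.0 - lam) * base_score
```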